facial-recognition software


3 things to watch for in A.I. in 2021

#artificialintelligence

But 2021 will likely be a big year for A.I., and with a new White House administration soon in place, there may be a clearer set of national A.I. policies that will trickle down to the business world. On New Year's Day, the U.S. Senate voted to overturn President Trump's veto of the National Defense Authorization Act and authorize $741 billion for defense spending, including the creation of a number of A.I.-related policies. Among the reasons Trump opposed the defense bill was the absence of a provision to repeal Section 230, which gives legal protections to Internet companies that host user-generated content. Although the defense bill was mostly geared toward military spending, it did contain a number of non-defense-related A.I. initiatives, as Stanford University's Human-Centered Artificial Intelligence group outlined. For instance, the bill would create a "National AI Initiative" that would coordinate A.I. research and development between "civilian agencies," the Defense Department, and intelligence agencies.


The unseen Black faces of AI algorithms

#artificialintelligence

Data sets are essential for training and validating machine-learning algorithms. But these data are typically sourced from the Internet, so they encode all the stereotypes, inequalities and power asymmetries that exist in society. These biases are exacerbated by the algorithmic systems that use them, which means that the output of the systems is discriminatory by nature, and will remain problematic and potentially harmful until the data sets are audited and somehow corrected. Although this has long been the case, the first major steps towards overcoming the issue were taken only four years ago, when Joy Buolamwini and Timnit Gebru published a report that kick-started sweeping changes in the ethics of artificial intelligence (AI). As a graduate student in computer science, Buolamwini was frustrated that commercial facial-recognition systems failed to identify her face in photographs and video footage.
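The kind of audit the passage calls for can start very simply: count how each demographic group is represented in a data set and flag groups that fall far below an even split. The sketch below is illustrative only; the attribute name `skin_tone`, the records, and the tolerance are hypothetical and not drawn from any data set mentioned in the article.

```python
from collections import Counter

def audit_group_balance(records, attribute, tolerance=0.5):
    """Flag groups that are under-represented for a given attribute.

    A group is flagged if its share of the data set is below
    `tolerance` times the share it would have under a perfectly
    uniform split across the observed groups.
    """
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    uniform_share = 1 / len(counts)
    flagged = {}
    for group, n in counts.items():
        share = n / total
        if share < tolerance * uniform_share:
            flagged[group] = share
    return counts, flagged

# Toy records with a hypothetical "skin_tone" attribute:
# 80% one group, 20% the other.
records = (
    [{"skin_tone": "lighter"}] * 80
    + [{"skin_tone": "darker"}] * 20
)
counts, flagged = audit_group_balance(records, "skin_tone")
print(counts)   # Counter({'lighter': 80, 'darker': 20})
print(flagged)  # {'darker': 0.2}
```

Real audits (such as the one Buolamwini and Gebru performed) go much further, measuring model error rates per group rather than raw representation, but a representation count like this is the usual first check.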


Ukrainians Use Drones, Facial-Recognition Software as They Probe Alleged War Crimes

WSJ.com: WSJD - Technology

BUCHA, Ukraine--Searching for evidence in the killings of hundreds of people by Russian troops here, Ukrainian prosecutor Ruslan Kravchenko unlocked the double doors leading to a boiler room on the south side of town. The space had been used as an office by the occupying forces. Two weeks after Russia's retreat from areas around the Ukrainian capital, local and national authorities are embarking on a wide-ranging probe of alleged war crimes with the aim of building cases strong enough to persuade an international court to hold the Kremlin and its soldiers responsible.


These high school students are fighting for ethical AI

#artificialintelligence

It's been a busy year for Encode Justice, an international group of grassroots activists pushing for ethical uses of artificial intelligence. There have been legislators to lobby, online seminars to hold, and meetings to attend, all in hopes of educating others about the harms of facial-recognition technology. It would be a lot for any activist group to fit into the workday; most of the team behind Encode Justice have had to cram it all in around high school. That's because the group was created and is run almost entirely by high schoolers. Its founder and president, Sneha Revanur, is a 16-year-old high-school senior in San Jose, California, and at least one of the members of the leadership team isn't old enough to get a driver's license.


Facebook is trying to make AI fairer by paying people to give it data

#artificialintelligence

Artificial intelligence systems are often criticized for built-in biases. Commercial facial-recognition software, for instance, may fail when attempting to classify women and people of color. In an effort to help make AI fairer in a variety of ways, Facebook (FB) is rolling out a new data set for AI researchers that includes a diverse group of paid actors who were explicitly asked to provide their own ages and genders. Facebook hopes researchers will use the open-source data set, which it announced Thursday, to help judge whether AI systems work well for people of different ages, genders, skin tones, and in different types of lighting. Facebook also released the data set internally for use within Facebook itself; the company said in a blog post that it is "encouraging" teams to use it.


Faces Are the Next Target for Fraudsters

#artificialintelligence

In the past year, thousands of people in the U.S. have tried to trick facial-verification systems to fraudulently claim unemployment benefits from state workforce agencies, according to identity-verification firm ID.me Inc. The company, which uses facial-recognition software to help verify individuals on behalf of 26 U.S. states, says that between June 2020 and January 2021 it found more than 80,000 attempts to fool the selfie step in government ID matchups among the agencies it worked with. That included people wearing special masks, using deepfakes--lifelike images generated by AI--or holding up images or videos of other people, says ID.me Chief Executive Blake Hall. Facial recognition for one-to-one identification has become one of the most widely used applications of artificial intelligence, allowing people to make payments via their phones, walk through passport-checking systems or verify themselves as workers.


Four AI technologies that could transform the way we live and work

Nature

Joy Buolamwini from the MIT Media Lab says facial-recognition software has the highest error rates for darker-skinned females. New applications powered by artificial intelligence (AI) are being embraced by the public and private sectors. Their early uses hint at what's to come. In June 2020, IBM, Amazon and Microsoft announced that they were stepping back from facial-recognition software development amid concerns that it reinforces racial and gender bias. Amazon and Microsoft said they would stop selling facial-recognition software to police until new laws are passed in the United States to address potential human-rights abuses.


Face recognition isn't just for humans -- it's learning to identify bears and cows, too

#artificialintelligence

San Francisco (CNN Business) It's hard for the average person to tell Dani, Lenore, and Bella apart: They all sport fashionably fuzzy brown coats and enjoy a lot of the same activities, like playing in icy-cold water and, occasionally, ripping apart a freshly caught fish. Melanie Clapham is not the average person. As a bear biologist, she has spent over a decade studying these grizzly bears, who live in Knight Inlet in British Columbia, Canada, and developed a sense for who is who by paying attention to little things that make them different. "I use individual characteristics -- say, one bear has a nick in its ear or a scar on the nose," she said. But Clapham knows most people don't have her eye for detail, and the bears' appearances change dramatically over the course of a year -- such as when they get winter coats and fatten up before denning -- which makes it even harder to distinguish between, say, Toffee and Blonde Teddy.


Controversial facial-recognition software used 30,000 times by LAPD in last decade, records show

Los Angeles Times

The Los Angeles Police Department has used facial-recognition software nearly 30,000 times since 2009, with hundreds of officers running images of suspects from surveillance cameras and other sources against a massive database of mugshots taken by law enforcement. The new figures, released to The Times, reveal for the first time how commonly facial recognition is used in the department, which for years has provided vague and contradictory information about how and whether it uses the technology. The LAPD has consistently denied having records related to facial recognition, and at times denied using the technology at all. The truth is that, while it does not have its own facial-recognition platform, LAPD personnel have access to facial-recognition software through a regional database maintained by the Los Angeles County Sheriff's Department. And between Nov. 6, 2009, and Sept. 11 of this year, LAPD officers used the system's software 29,817 times.


Think your mask makes you invisible to facial recognition? Not so fast, AI companies say

#artificialintelligence

The future of facial recognition technology may depend on one very specific part of the face: the area around the eyes. Before the global pandemic, facial recognition systems typically worked by comparing measurements between different facial features in one image to those in another picture. But when you're wearing a mask over your nose, mouth, and cheeks, you're offering up a fraction of the information normally used to figure out your identity. Now, numerous facial recognition companies say they are focusing on better identifying people based on the portion of the face above the nose and, in particular, the eye region. The stakes are high to get it right, and soon.
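The comparison step described above -- measuring features in one image and matching them against another -- is often reduced in practice to comparing feature vectors (embeddings) with a similarity score and a decision threshold. The sketch below shows that decision logic only; the toy 4-dimensional "eye-region" vectors and the 0.8 threshold are illustrative assumptions, not values from any real system mentioned in the article.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_person(embedding_a, embedding_b, threshold=0.8):
    """Declare a one-to-one match when similarity clears the threshold."""
    return cosine_similarity(embedding_a, embedding_b) >= threshold

# Toy 4-dimensional feature vectors standing in for eye-region embeddings.
enrolled = [0.9, 0.1, 0.4, 0.3]
probe_same = [0.85, 0.15, 0.42, 0.28]   # near the enrolled vector
probe_other = [0.1, 0.9, 0.2, 0.7]      # far from the enrolled vector
print(same_person(enrolled, probe_same))   # True
print(same_person(enrolled, probe_other))  # False
```

Masks shrink the usable feature set, which is why vendors are retraining models to produce embeddings from the eye region alone; the thresholding logic itself stays the same.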